
    Learning Co-Sparse Analysis Operators with Separable Structures

    In the co-sparse analysis model, a set of filters is applied to a signal out of the signal class of interest, yielding sparse filter responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. Learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be used to compute the filter responses efficiently, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound on the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to learn an analysis operator with separable structures, which includes a novel and efficient step size selection rule. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. Comment: 11 pages double column, 4 figures, 3 tables
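The efficiency gain from a separable structure can be sketched in a few lines: for a 2D patch, applying two small operators along the columns and rows is equivalent to applying one large Kronecker-structured operator to the vectorized signal. This is a toy NumPy illustration of that equivalence, not the paper's implementation; all sizes and operator names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: one 2D signal patch, two small analysis operators.
X = rng.standard_normal((8, 8))        # 2D signal patch
O1 = rng.standard_normal((12, 8))      # operator acting along columns
O2 = rng.standard_normal((12, 8))      # operator acting along rows

# Separable application: two small matrix products.
resp_sep = O1 @ X @ O2.T

# Equivalent unstructured application: the Kronecker-product operator on
# the vectorized patch (column-major vec to match vec(A X B^T) = (B ⊗ A) vec(X)).
resp_full = (np.kron(O2, O1) @ X.flatten(order="F")).reshape((12, 12), order="F")

assert np.allclose(resp_sep, resp_full)
```

The separable route multiplies two 12x8 matrices instead of forming a 144x64 Kronecker matrix, which is where the computational saving comes from.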

    Separable Cosparse Analysis Operator Learning

    The ability to represent a certain class of signals sparsely has many applications in data analysis, image processing, and other research fields. Among sparse representations, the cosparse analysis model has recently gained increasing interest. Many signals exhibit a multidimensional structure, e.g. images or three-dimensional MRI scans. Most data analysis and learning algorithms use vectorized signals and thereby do not account for this underlying structure. The drawback of not taking the inherent structure into account is a dramatic increase in computational cost. We propose an algorithm for learning a cosparse analysis operator that adheres to the preexisting structure of the data and thus allows for a very efficient implementation. This is achieved by enforcing a separable structure on the learned operator. Our learning algorithm is able to deal with multidimensional data of arbitrary order. We evaluate our method on volumetric data, using three-dimensional MRI scans as an example. Comment: 5 pages, 3 figures, accepted at EUSIPCO 201
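For data of order three, "adhering to the preexisting structure" amounts to applying one small operator along each mode of the tensor rather than one huge operator on the vectorized volume. A minimal sketch under assumed toy sizes (this is a generic mode-product construction, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3D volume (e.g. a small MRI block) and one small
# analysis operator per mode.
T = rng.standard_normal((6, 6, 6))
O1, O2, O3 = (rng.standard_normal((10, 6)) for _ in range(3))

# Separable analysis: one small operator per mode (Tucker-style mode products).
resp = np.einsum('ia,jb,kc,abc->ijk', O1, O2, O3, T)

# The equivalent unstructured operator is the Kronecker product acting on
# the vectorized volume -- far more memory and floating-point operations.
big = np.kron(O3, np.kron(O2, O1))          # 1000 x 216 matrix
resp_vec = big @ T.flatten(order="F")
assert np.allclose(resp.flatten(order="F"), resp_vec)
```

The separable form stores three 10x6 matrices instead of one 1000x216 matrix, and the gap widens rapidly with the order and size of the data.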

    On The Sample Complexity of Sparse Dictionary Learning

    In the synthesis model, signals are represented as sparse combinations of atoms from a dictionary. Dictionary learning describes the acquisition process of the underlying dictionary for a given set of training samples. While ideally this would be achieved by optimizing the expectation of the factors over the underlying distribution of the training data, in practice the necessary information about the distribution is not available. Therefore, in real-world applications it is achieved by minimizing an empirical average over the available samples. The main goal of this paper is to provide a sample complexity estimate that controls to what extent the empirical average deviates from the expected cost function. This estimate then provides a suitable measure of the accuracy of the representation of the learned dictionary. The presented approach exemplifies the general results proposed by the authors in "Sample Complexity of Dictionary Learning and other Matrix Factorizations" (Gribonval et al.) and gives more concrete bounds on the sample complexity of dictionary learning. We cover a variety of sparsity measures employed in the learning procedure. Comment: 4 pages, submitted to Statistical Signal Processing Workshop 201
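To make "minimizing an empirical average over the available samples" concrete, the per-sample cost of a dictionary can be taken as an l1-penalized representation error and averaged over the training set. The sketch below uses a plain ISTA solver and hypothetical sizes and penalty; it illustrates the empirical cost function, not the paper's specific sparsity measures:

```python
import numpy as np

rng = np.random.default_rng(2)

def sparse_coding_cost(D, y, lam=0.1, n_iter=200):
    """Approximate min_x 0.5*||y - Dx||^2 + lam*||x||_1 by ISTA
    (proximal gradient with step 1/L, monotonically decreasing)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - (D.T @ (D @ x - y)) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return 0.5 * np.sum((y - D @ x) ** 2) + lam * np.sum(np.abs(x))

def empirical_cost(D, Y, lam=0.1):
    """Empirical average of the per-sample cost over the columns of Y."""
    return np.mean([sparse_coding_cost(D, y, lam) for y in Y.T])

D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
Y = rng.standard_normal((16, 500))          # hypothetical training samples
cost = empirical_cost(D, Y)
```

Dictionary learning would then minimize `empirical_cost` over `D`; the sample complexity question is how far this average can deviate from its expectation over the signal distribution.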

    Sample Complexity of Dictionary Learning and other Matrix Factorizations

    Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis (PCA), non-negative matrix factorization (NMF), K-means clustering, etc., rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection. While the idealized task would be to optimize the expected quality of the factors over the underlying distribution of training vectors, it is achieved in practice by minimizing an empirical average over the considered collection. The focus of this paper is to provide sample complexity estimates to uniformly control how much the empirical average deviates from the expected cost function. Standard arguments imply that the performance of the empirical predictor also exhibits such guarantees. The level of genericity of the approach encompasses several possible constraints on the factors (tensor product structure, shift-invariance, sparsity, ...), thus providing a unified perspective on the sample complexity of several widely used matrix factorization schemes. The derived generalization bounds behave proportionally to sqrt(log(n)/n) with respect to the number of samples n for the considered matrix factorization techniques. Comment: to appear
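The qualitative content of a sqrt(log(n)/n) rate is that the gap between an empirical average and its expectation shrinks as samples accumulate. A Monte Carlo toy (a hypothetical scalar cost, not the paper's matrix-factorization objective) makes this visible:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_deviation(n, trials=200):
    """Average absolute deviation of the empirical mean of f(x) = x^2
    (x ~ N(0,1), true expectation 1) from its expectation, over trials."""
    devs = [abs(np.mean(rng.standard_normal(n) ** 2) - 1.0)
            for _ in range(trials)]
    return float(np.mean(devs))

sizes = (10, 100, 1000, 10000)
devs = [mean_deviation(n) for n in sizes]
# Deviation shrinks roughly like 1/sqrt(n) as n grows, the same qualitative
# behaviour (up to the log factor) as the sqrt(log(n)/n) bound above.
```

The paper's contribution is that this concentration holds uniformly over all admissible factorizations, which is what turns a pointwise law of large numbers into a generalization guarantee.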

    The Iray Light Transport Simulation and Rendering System

    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system renders complex scenes at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.

    Apprentissage de dictionnaire pour les représentations parcimonieuses (Dictionary Learning for Sparse Representations)

    This is an abstract of the full preprint available at http://hal.inria.fr/hal-00918142/. A popular approach within the signal processing and machine learning communities consists in modelling high-dimensional data as sparse linear combinations of atoms selected from a dictionary. Given the importance of the choice of the dictionary for the operational deployment of these tools, a growing interest in learned dictionaries has emerged. The most popular dictionary learning techniques, which are expressed as large-scale matrix factorization through the optimization of a non-convex cost function, have been widely disseminated thanks to extensive empirical evidence of their success and steady algorithmic progress. Yet, until recently they remained essentially heuristic. We will present recent work on statistical aspects of sparse dictionary learning, contributing to the characterization of the excess risk as a function of the number of training samples. The results cover not only sparse dictionary learning but also a much larger class of constrained matrix factorization problems.

    The value of multiple data set calibration versus model complexity for improving the performance of hydrological models in mountain catchments

    The assessment of snow, glacier, and rainfall runoff contributions to discharge in mountain streams is of major importance for adequate water resource management. Such contributions can be estimated via hydrological models, provided that the modeling adequately accounts for snow and glacier melt, as well as rainfall runoff. We present a multiple data set calibration approach to estimate runoff composition using hydrological models with three levels of complexity. For this purpose, the code of the conceptual runoff model HBV-light was enhanced to allow calibration and validation of simulations against glacier mass balances, satellite-derived snow cover area, and measured discharge. Three levels of complexity of the model were applied to glacierized catchments in Switzerland, ranging from 39 to 103 km². The results indicate that all three observational data sets are reproduced adequately by the model, allowing an accurate estimation of the runoff composition in the three mountain streams. However, calibration against only runoff leads to unrealistic snow and glacier melt rates. Based on these results, we recommend using all three observational data sets in order to constrain model parameters and compute snow, glacier, and rain contributions. Finally, based on the comparison of model performance across different complexities, we postulate that the availability and use of different data sets to calibrate hydrological models might be more important than model complexity for achieving realistic estimations of runoff composition.
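Calibrating against several observation types typically means collapsing per-data-set skill scores into one objective. The sketch below combines Nash-Sutcliffe efficiencies for discharge, snow cover, and glacier mass balance with equal weights; the function names, weighting, and score choice are hypothetical illustrations, and HBV-light's actual multi-objective formulation may differ:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values <= 0 are poor."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def multi_dataset_objective(sim, obs, weights=(1/3, 1/3, 1/3)):
    """Weighted mean of per-data-set efficiencies; `sim` and `obs` are dicts
    keyed by observation type (hypothetical keys shown below)."""
    keys = ('discharge', 'snow_cover', 'mass_balance')
    return sum(w * nse(sim[k], obs[k]) for w, k in zip(weights, keys))

# Toy usage: a model reproducing all three data sets exactly scores 1.
obs = {'discharge':    [2.0, 3.5, 1.0, 4.2],
       'snow_cover':   [0.9, 0.6, 0.2, 0.1],
       'mass_balance': [-0.4, -0.9, -0.2, -0.6]}
score_perfect = multi_dataset_objective(obs, obs)
```

A calibration that maximizes only the discharge term can still score well while getting the snow and glacier terms badly wrong, which is the failure mode the abstract warns about.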

    Acute Effects of 3,4-Methylenedioxymethamphetamine and Methylphenidate on Circulating Steroid Levels in Healthy Subjects

    3,4-Methylenedioxymethamphetamine (MDMA, 'ecstasy') and methylphenidate are widely used psychoactive substances. MDMA primarily enhances serotonergic neurotransmission, and methylphenidate increases dopamine but has no serotonergic effects. Both drugs also increase norepinephrine, resulting in sympathomimetic properties. Here we studied the effects of MDMA and methylphenidate on 24-h plasma steroid profiles. Sixteen healthy subjects (eight men, eight women) were treated with single doses of MDMA (125 mg), methylphenidate (60 mg), MDMA + methylphenidate, and placebo on four separate days using a cross-over study design. Cortisol, cortisone, corticosterone, 11-dehydrocorticosterone, aldosterone, 11-deoxycorticosterone, dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEAS), androstenedione, and testosterone were repeatedly measured up to 24 h using liquid chromatography-tandem mass spectrometry. MDMA significantly increased the plasma concentrations of cortisol, corticosterone, 11-dehydrocorticosterone, and 11-deoxycorticosterone and also tended to moderately increase aldosterone levels compared with placebo. MDMA also increased the sum of cortisol + cortisone and the cortisol/cortisone ratio, consistent with an increase in glucocorticoid production. MDMA did not alter the levels of cortisone, DHEA, DHEAS, androstenedione, or testosterone. Methylphenidate did not affect any of the steroid concentrations, and it did not change the effects of MDMA on circulating steroids. In summary, the serotonin releaser MDMA has acute effects on circulating steroids. These effects are not observed after stimulation of the dopamine and norepinephrine systems with methylphenidate. The present findings support the view that serotonin rather than dopamine and norepinephrine mediates the acute pharmacologically-induced stimulation of the hypothalamic-pituitary-adrenal axis in the absence of other stressors. © 2014 S. Karger AG, Basel

    Die Idee dahinter ... : Aspekte zur Gestaltung lernreicher Lehre (The Idea Behind It ...: Aspects of Designing Learning-Rich Teaching)

    The volume comprises numerous examples from instructors who have made their courses 'richer in learning' in several respects. The concepts were all developed or further developed within the advanced module of the programme "Professionelle Lehrkompetenz für die Hochschule" of the network "hochschuldidaktik nrw" at the Universität Siegen. The eleven contributions cover a broad spectrum of course formats and disciplines: the natural sciences and engineering are represented, as are architecture, education, social work, and literary studies. The courses include practical labs, seminars, exercises, and the like, often with a project character or project elements, frequently with changing learning locations, and run either alongside the semester or in compact form.